Signal Identification Using a Least L1 Norm Algorithm
Author
Abstract
errors, and are essentially independent of the 25 larger errors. This is shown clearly by the fact that the SNTLN1 solution is changed only slightly by a change in the magnitude of the large errors. In contrast, the final error of the VarPro solution is determined primarily by the largest data errors, and will increase in proportion to their magnitude as it is increased. This will also be true for any approximation method based on L2 norm minimization. It is also significant that SNTLN1 is able to reduce the relative error from an initial value of about 10% to about 3e-7, in an average of 6.1 iterations. This demonstrates the rapid convergence rate of the SNTLN1 algorithm for this class of problems. The robustness of the algorithm is also shown by the relatively small range of final errors (from min to max) in Table 2, for the 10 different problems solved. This robust performance was observed in all test problems solved.
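The robustness described above stems from a general property of L1-norm fitting: a few gross outliers barely move the solution, whereas an L2 fit shifts in proportion to them. A minimal illustration of this property (not the SNTLN1 algorithm itself; all function names here are hypothetical), comparing an ordinary least-squares line fit with a least-absolute-deviations fit computed by iteratively reweighted least squares:

```python
# Fit y = a*x + b to data containing a few large outliers, once by ordinary
# least squares (L2) and once by least absolute deviations (L1) via IRLS.
# The L1 fit stays close to the true parameters; the L2 fit is pulled away.

def least_squares_line(xs, ys, ws=None):
    """Weighted least-squares fit of y = a*x + b (closed-form 2x2 system)."""
    if ws is None:
        ws = [1.0] * len(xs)
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = sw * sxx - sx * sx
    a = (sw * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

def l1_line(xs, ys, iters=50, eps=1e-8):
    """Approximate L1 (least absolute deviations) fit via IRLS:
    reweight each point by 1/|residual| and re-solve."""
    a, b = least_squares_line(xs, ys)
    for _ in range(iters):
        ws = [1.0 / max(abs(y - (a * x + b)), eps) for x, y in zip(xs, ys)]
        a, b = least_squares_line(xs, ys, ws)
    return a, b

# True model y = 2x + 1, with 3 gross outliers among 20 points.
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
for i in (3, 9, 15):
    ys[i] += 50.0

a2, b2 = least_squares_line(xs, ys)  # L2 fit: biased by the outliers
a1, b1 = l1_line(xs, ys)             # L1 fit: essentially unaffected
```

The L1 estimate recovers (a, b) ≈ (2, 1) because the 17 clean points dominate the absolute-error criterion, while the L2 intercept is shifted by several units.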
Similar Resources
An Analytical Model for Predicting the Convergence Behavior of the Least Mean Mixed-Norm (LMMN) Algorithm
The Least Mean Mixed-Norm (LMMN) algorithm is a stochastic gradient-based algorithm whose objective is to minimize a combination of the cost functions of the Least Mean Square (LMS) and Least Mean Fourth (LMF) algorithms. This algorithm has inherited many properties and advantages of the LMS and LMF algorithms and mitigated their weaknesses in some ways. The main issue of the LMMN algorithm is t...
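A commonly cited form of the LMMN stochastic-gradient update blends the LMS and LMF error terms through a mixing parameter δ ∈ [0, 1], with δ = 1 recovering LMS and δ = 0 recovering LMF. A minimal sketch under that assumption (variable names are illustrative), identifying a known FIR system from noiseless input/output samples:

```python
import random

def lmmn_identify(h_true, mu=0.01, delta=0.5, n_samples=5000, seed=0):
    """Identify an unknown FIR system h_true with the LMMN update:
    w <- w + mu * e * x * (delta + (1 - delta) * e^2)."""
    rng = random.Random(seed)
    N = len(h_true)
    w = [0.0] * N          # adaptive filter taps
    x_buf = [0.0] * N      # tapped delay line of recent inputs
    for _ in range(n_samples):
        x_buf = [rng.uniform(-1, 1)] + x_buf[:-1]       # shift in new sample
        d = sum(h * x for h, x in zip(h_true, x_buf))   # desired output
        y = sum(wi * x for wi, x in zip(w, x_buf))      # filter output
        e = d - y
        # LMMN: delta weights the LMS term, (1 - delta) the LMF term
        w = [wi + mu * e * (delta + (1 - delta) * e * e) * x
             for wi, x in zip(w, x_buf)]
    return w

h = [0.5, -0.3, 0.2]
w = lmmn_identify(h)   # converges close to h in the noiseless case
```

With noiseless data the taps converge to the true system; in practice δ trades the low steady-state error of LMF against the robustness of LMS.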
On the Worst-Case Divergence of the Least-Squares Algorithm
In this paper, we provide an H∞-norm lower bound on the worst-case identification error of least-squares estimation when using FIR model structures. This bound increases as a logarithmic function of model complexity and is valid for a wide class of inputs characterized as being quasi-stationary with covariance function falling off sufficiently quickly.
Suboptimal Algorithms for Worst-Case Identification in H∞ and Model Validation
New algorithms based on convex programming are proposed for worst-case system identification. The algorithms are optimal within a factor of two asymptotically. Further, model validation, or data consistency, is embedded in the identification process. Explicit worst-case identification error bounds in the H∞ norm are also derived for both uniformly and nonuniformly spaced frequency-response samples.
Adaptive Estimation of Sparse Signals: Where RLS Meets the l1-Norm
Using the l1-norm to regularize the least-squares criterion, the batch least-absolute shrinkage and selection operator (Lasso) has well-documented merits for estimating sparse signals of interest emerging in various applications where observations adhere to parsimonious linear regression models. To cope with high complexity, increasing memory requirements, and lack of tracking capability that b...
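The building block behind such l1-regularized least-squares estimators is the soft-thresholding operator, the proximal map of the l1-norm. A minimal batch sketch (illustrative only, not the adaptive algorithm of the cited paper; all names are hypothetical) that recovers a sparse vector with the iterative shrinkage-thresholding algorithm (ISTA):

```python
import random

def soft_threshold(z, t):
    """Soft-thresholding: the proximal operator of t * |.|_1."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_ista(A, y, lam, iters=2000):
    """Batch Lasso via ISTA: minimize 0.5*||A w - y||^2 + lam*||w||_1.
    Step size 1/L with L an upper bound on ||A^T A||_2 (Frobenius bound)."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)
    w = [0.0] * n
    for _ in range(iters):
        # residual r = A w - y, gradient g = A^T r
        r = [sum(ai * wi for ai, wi in zip(row, w)) - yi
             for row, yi in zip(A, y)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the smooth term, then shrink toward zero
        w = [soft_threshold(wi - gj / L, lam / L) for wi, gj in zip(w, g)]
    return w

rng = random.Random(1)
w_true = [3.0, 0.0, 0.0, -2.0, 0.0]           # sparse ground truth
A = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
y = [sum(a * wt for a, wt in zip(row, w_true)) for row in A]
w_hat = lasso_ista(A, y, lam=0.1)
```

The shrinkage step drives the truly zero coefficients to (near) zero while leaving the large ones essentially intact, which is the parsimony property the snippet above refers to.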
Blind Estimation of ARMA Systems
In this paper, we present an approach to blind estimation of non-minimum-phase ARMA models using fourth-order statistics. The algorithm follows a Residual Time Series (RTS) procedure, which sequentially identifies the AR and MA parts. In step 1 of the RTS concept, the AR estimation is geared to deliver the shortest impulse response of the overall system, so that a new fast approach to MA sys...
Publication date: 1998